Levenberg–Marquardt multi-classification using hinge loss function

Authors

Abstract

Incorporating higher-order optimization methods, such as Levenberg–Marquardt (LM), has been shown to yield better generalizable solutions for deep learning problems. However, these methods suffer from very long processing times and high training complexity, especially as datasets become large, as in multi-view classification problems, where finding global optima is costly. To address this issue, we develop an LM-based solution with, to the best of our knowledge, the first implementation of the hinge loss for multiview classification. The hinge loss allows the neural network to converge faster and perform better than the logistic or squared losses. We validate our method by experimenting on various multiclass classification challenges of varying data size. The empirical results report the accuracy rates achieved, highlighting how our approach outperforms in all cases when data are limited. Our paper presents an important relationship between the hinge loss and LM optimization and how it can impact ...
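To make the loss concrete: a common multiclass extension of the hinge loss is the Crammer–Singer formulation, which penalizes any class whose score comes within a margin of the true class's score. The sketch below is a minimal NumPy illustration of that formulation, not the authors' LM-based implementation; the function name and margin default are my own.

```python
import numpy as np

def multiclass_hinge_loss(scores, y, margin=1.0):
    """Crammer-Singer multiclass hinge loss (illustrative sketch).

    scores : (n_samples, n_classes) raw classifier outputs
    y      : (n_samples,) integer true-class labels
    """
    n = scores.shape[0]
    correct = scores[np.arange(n), y]             # score of the true class
    margins = scores - correct[:, None] + margin  # margin violations per class
    margins[np.arange(n), y] = 0.0                # the true class never violates
    # hinge: only positive violations contribute; take the worst offender
    return np.maximum(margins, 0.0).max(axis=1).mean()
```

The loss is zero only when every true-class score beats all other class scores by at least the margin, which is what drives the large-margin behaviour the abstract credits for faster convergence.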



Related articles

Classification with a Reject Option using a Hinge Loss

We consider the problem of binary classification where the classifier can, for a particular cost, choose not to classify an observation. Just as in the conventional classification problem, minimization of the sample average of the cost is a difficult optimization problem. As an alternative, we propose the optimization of a certain convex loss function φ, analogous to the hinge loss used in supp...
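The convex surrogate described here is, in Bartlett and Wegkamp's formulation, a generalized hinge whose slope steepens for negative margins as the rejection cost d < 1/2 shrinks. The sketch below illustrates that loss and a simple abstention rule; the threshold parameter and function names are my own, not the paper's notation.

```python
import numpy as np

def generalized_hinge(z, d=0.2):
    """Generalized hinge loss for classification with a reject option.

    z : margin values y * f(x); d : rejection cost, 0 < d < 1/2.
    Piecewise: 1 - a*z for z <= 0 (with a = (1-d)/d >= 1),
               1 - z   for 0 <= z <= 1,
               0       for z >= 1.
    """
    z = np.asarray(z, dtype=float)
    a = (1.0 - d) / d
    # the maximum of the three linear pieces realizes the piecewise form
    return np.maximum.reduce([np.zeros_like(z), 1.0 - z, 1.0 - a * z])

def predict_with_reject(f, threshold):
    """Predict sign(f); abstain (return 0) when |f| <= threshold."""
    f = np.asarray(f, dtype=float)
    return np.where(f > threshold, 1, np.where(f < -threshold, -1, 0))
```

Smaller rejection costs d make the negative-margin slope a = (1-d)/d steeper, so confident mistakes are punished harder relative to abstaining.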


Statistical Tests Using Hinge/ε-Sensitive Loss

Abstract. Statistical tests used in the literature to compare algorithms use the misclassification error, which is based on the 0/1 loss (and the square loss for regression). Kernel-based support vector machine classifiers (regressors), however, are trained to minimize the hinge (ε-sensitive) loss, and hence they should not be assessed or compared in terms of the 0/1 (square) loss but with the loss meas...


BAYES ESTIMATION USING A LINEX LOSS FUNCTION

This paper considers estimation of a normal mean θ when the variance is unknown, using the LINEX loss function. The unique Bayes estimate of θ is obtained when the precision parameter has an Inverse Gaussian prior density.


Optimizing the Classification Cost using SVMs with a Double Hinge Loss

The objective of this study is to minimize the classification cost using Support Vector Machines (SVMs) Classifier with a double hinge loss. Such binary classifiers have the option to reject observations when the cost of rejection is lower than that of misclassification. To train this classifier, the standard SVM optimization problem was modified by minimizing a double hinge loss function consi...


Multi-Group Classification Using Interval Linear Programming

Among the various statistical and data-mining discriminant analysis methods proposed so far for group classification, linear programming discriminant analysis has recently attracted researchers' interest. This study evaluates multi-group discriminant linear programming (MDLP) for classification problems against well-known methods such as neural networks and support vector machines. MDLP is less compli...



Journal

Journal title: Neural Networks

Year: 2021

ISSN: 1879-2782, 0893-6080

DOI: https://doi.org/10.1016/j.neunet.2021.07.010